Initial Analysis of OpenReview API Security Incident
The staff of OpenReview are deeply troubled by the exploitation discovered on November 27. Trust and anonymity are fundamental to our community. OpenReview is taking concrete steps to reinforce the protections on which we all depend and to thoroughly investigate this incident.
After becoming aware of the vulnerability and issuing a patch that same morning, we activated our incident response process: engaging an external cybersecurity and forensic investigation firm, conducting code audits, analyzing log data to determine the scope of the unauthorized activity, and communicating and collaborating with publication venues as appropriate. Throughout, our goal has been to support the community as effectively and transparently as possible.
Below are preliminary findings from a point-in-time analysis based on currently available information. Please note that the analysis is ongoing, and these figures and conclusions may change as our work and the forensic investigation proceed. Based on data analyzed to date, we believe:
- Approximately 97% of OpenReview’s 3,203 venues were unaffected by this incident.
- Of the remaining 3%, about half had four or fewer papers queried.
- Among the remaining roughly 1.5% (approximately 50 venues) where more papers were queried, most activity appears to have come from individuals probing a small number of papers (a rough numeric breakdown follows this list).
- As is now widely known, ICLR 2026 experienced an automated spidering attack that collated and then posted reviewer-identity information. We are actively engaging with internet platforms, law enforcement, and other institutions to have this data removed, and to address the actions of those responsible.
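For readers checking the figures above, the following is an illustrative back-of-the-envelope sketch, assuming only the 3,203-venue total and the stated percentages; the computed counts are approximations for orientation, not official tallies from our log analysis.

```python
# Illustrative arithmetic only: reproduces the approximate venue breakdown
# implied by the figures stated above. Not derived from raw log data.
TOTAL_VENUES = 3203

unaffected = round(TOTAL_VENUES * 0.97)       # ~3,107 venues: no improper queries observed
affected = TOTAL_VENUES - unaffected          # ~96 venues with any papers queried
lightly_probed = round(TOTAL_VENUES * 0.015)  # ~48 venues: four or fewer papers queried
more_queried = affected - lightly_probed      # ~48 venues: the "approximately 50" above

print(f"Unaffected venues:             {unaffected}")
print(f"Affected venues (any queries): {affected}")
print(f"Four or fewer papers queried:  {lightly_probed}")
print(f"More papers queried:           {more_queried}")
```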
Any improper access to reviewer identities violates the OpenReview Terms of Service and multiple Codes of Conduct, and undermines community norms. Anyone who obtained such data or its derivatives should delete all copies and refrain from further use or dissemination.
This incident also occurs in a broader context: our AI research community is increasingly under deliberate attack. Across the AI research ecosystem, there has been a documented rise in attempts to compromise research systems, illegally scrape data at scale, create fraudulent identities, and subvert review systems. Universities, corporations, research organizations, and the operators of conference systems and software infrastructure supporting AI/ML work have all reported increased probing and cyber intrusion attempts.
The strength of our scientific community has always depended on shared norms of integrity, goodwill, and mutual respect. Moments like this remind us how important those norms are, and how vital it is that we uphold them together. We ask every member of the community to approach this moment with restraint, fairness, and empathy for colleagues who may be affected. In doing so, we uphold both our values and the foundation on which open, collaborative, and rigorous scientific inquiry depends.